54 research outputs found

    Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments

    Full text link
    We present a self-supervised approach to ignoring "distractors" in camera images for the purposes of robustly estimating vehicle motion in cluttered urban environments. We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each input image, which we use to train a deep convolutional network. At run-time we use the predicted ephemerality and depth as an input to a monocular visual odometry (VO) pipeline, using either sparse features or dense photometric matching. Our approach yields metric-scale VO using only a single camera and can recover the correct egomotion even when 90% of the image is obscured by dynamic, independently moving objects. We evaluate our robust VO methods on more than 400 km of driving from the Oxford RobotCar Dataset and demonstrate reduced odometry drift and significantly improved egomotion estimation in the presence of large moving vehicles in urban traffic. Comment: International Conference on Robotics and Automation (ICRA), 2018. Video summary: http://youtu.be/ebIrBn_nc-
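
    As a rough illustration of how such predicted ephemerality and depth could be consumed by a sparse VO front end, the following is a minimal OpenCV/NumPy sketch, not the authors' pipeline: the function name, the 0.5 threshold and the scale-recovery heuristic are illustrative assumptions, and ephemerality0/depth0 simply stand in for the network outputs described above.

    ```python
    # Minimal sketch (not the authors' pipeline): gate sparse feature matches with a
    # predicted per-pixel ephemerality mask before a two-view egomotion estimate, and
    # use the predicted depth to recover an approximately metric scale.
    import cv2
    import numpy as np

    def masked_vo_step(img0, img1, ephemerality0, depth0, K, eph_thresh=0.5):
        """Relative pose between two frames, ignoring likely-dynamic (ephemeral) pixels."""
        orb = cv2.ORB_create(2000)
        kp0, des0 = orb.detectAndCompute(img0, None)
        kp1, des1 = orb.detectAndCompute(img1, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des0, des1)

        pts0, pts1, prior_depth = [], [], []
        for m in matches:
            u, v = (int(round(c)) for c in kp0[m.queryIdx].pt)
            if ephemerality0[v, u] < eph_thresh:          # keep only "static" features
                pts0.append(kp0[m.queryIdx].pt)
                pts1.append(kp1[m.trainIdx].pt)
                prior_depth.append(depth0[v, u])
        pts0, pts1 = np.float32(pts0), np.float32(pts1)

        E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, inliers = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)

        # The essential-matrix translation is only defined up to scale; comparing the
        # predicted depths of the retained points with their triangulated depths is one
        # simple way to estimate a metric scale factor.
        P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P1 = K @ np.hstack([R, t])
        X = cv2.triangulatePoints(P0, P1, pts0.T, pts1.T)
        z = X[2] / X[3]                                    # depths in the first frame
        good = (inliers.ravel() > 0) & (z > 0)
        scale = np.median(np.asarray(prior_depth)[good] / z[good])
        return R, t * scale
    ```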

    Proceedings of the Thirteenth International Society of Sports Nutrition (ISSN) Conference and Expo

    Get PDF
    Meeting Abstracts: Proceedings of the Thirteenth International Society of Sports Nutrition (ISSN) Conference and Expo, Clearwater Beach, FL, USA, 9-11 June 2016.

    Robust lifelong visual navigation and mapping

    No full text
    The ability to precisely determine one's location within the world (localisation) is a key requirement for any robot wishing to navigate through the world. For long-term operation, such a localisation system must be robust to changes in the environment, both short term (e.g. traffic, weather) and long term (e.g. seasons). This thesis presents two methods for performing such localisation using cameras — small, cheap, lightweight sensors that are universally available. Whilst many image-based localisation systems have been proposed in the past, they generally rely on either feature matching, which fails under many degradations such as motion blur, or on photometric consistency, which fails under changing illumination. The methods we propose here directly align images with a dense prior map. The first method uses maps synthesised from a combination of LIDAR scanners to generate geometry and cameras to generate appearance, whilst the second uses vision for both mapping and localisation. Both make use of an information-theoretic metric, the Normalised Information Distance (NID), for image alignment, relaxing the appearance-constancy assumption inherent in photometric methods. Our methods require significant computational resources, but through the use of commodity GPUs we are able to run them at a rate of 8-10 Hz. Our GPU implementations use low-level OpenGL, enabling compatibility across almost any GPU hardware. We also present a method for calibrating multi-sensor systems, enabling the joint use of cameras and LIDAR for mapping. Through experiments on both synthetic data and real-world data from over 100 km of driving outdoors, we demonstrate the robustness of our localisation system to large variations in appearance. Comparisons with state-of-the-art feature-based and direct methods show that ours is significantly more robust, whilst maintaining similar precision.
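
    Because both methods hinge on the Normalised Information Distance, a minimal NumPy sketch of the NID objective between two aligned single-channel images may help; the 32-bin joint histogram and the direct computation here are illustrative simplifications and bear no relation to the thesis's GPU implementation.

    ```python
    import numpy as np

    def nid(img_a, img_b, bins=32):
        """Normalised Information Distance between two aligned single-channel images.
        NID(A, B) = (H(A, B) - I(A; B)) / H(A, B): a metric on [0, 1] that, unlike a
        photometric error, tolerates strong (even non-linear) appearance changes."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()
        p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        h_ab = entropy(p_ab)
        mi = entropy(p_a) + entropy(p_b) - h_ab   # mutual information I(A; B)
        return (h_ab - mi) / h_ab

    # Localisation then amounts to minimising nid(live_image, rendered_map_view) over
    # candidate poses; the map rendering and pose optimisation are omitted here.
    ```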

    Multiscale modelling and analysis of collective decision making in swarm robotics

    No full text
    We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable.
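
    For readers unfamiliar with the construction, the continuous-time Markov description of the symmetry parameter s is, in its generic one-dimensional form (the paper's specific coefficients are not reproduced here), a Langevin equation with an associated Fokker-Planck equation for the probability density P(s, t):

    ```latex
    % Generic form only; A(s) and B(s) are the drift and diffusion coefficients of the
    % symmetry parameter and W_t is a standard Wiener process.
    \mathrm{d}s = A(s)\,\mathrm{d}t + \sqrt{B(s)}\,\mathrm{d}W_t,
    \qquad
    \frac{\partial P(s,t)}{\partial t}
      = -\frac{\partial}{\partial s}\bigl[A(s)\,P(s,t)\bigr]
      + \frac{1}{2}\,\frac{\partial^{2}}{\partial s^{2}}\bigl[B(s)\,P(s,t)\bigr].
    ```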

    Encapsulators: a new software paradigm in Smalltalk-80

    No full text

    Stationary probability distribution and splitting probability for a virtual swarm.

    No full text
    (left) Stationary probability distribution as estimated from the simulation output (red markers) and computed from the FPE coefficients [Eq. (56) in Appendix 1.2]. (right) Splitting probability as estimated directly from the simulation output [cf. Eq. (53) in Appendix 1.2] (red markers) and computed using the integrated stationary distribution [Eq. (54) in Appendix 1.2] (blue curve). Simulation parameters are given in Table 1.
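
    For context, and without reproducing the paper's Eqs. (53)-(56), the standard one-dimensional Fokker-Planck expressions behind these two quantities, assuming drift A(s), diffusion B(s) and an interval [a, b] with initial value s_0, are:

    ```latex
    % Textbook forms; \mathcal{N} is a normalisation constant and \pi_b(s_0) is the
    % probability of reaching b before a when starting from s_0.
    \psi(s) = \exp\!\left(\int_{a}^{s} \frac{2A(s')}{B(s')}\,\mathrm{d}s'\right),
    \qquad
    P_{\mathrm{st}}(s) = \frac{\mathcal{N}\,\psi(s)}{B(s)},
    \qquad
    \pi_{b}(s_0) = \frac{\int_{a}^{s_0} \psi(s)^{-1}\,\mathrm{d}s}
                        {\int_{a}^{b} \psi(s)^{-1}\,\mathrm{d}s}.
    ```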

    Comparison of swarm properties for macroscopic-discrete simulations of the virtual swarm using different values of the avoidance radius.

    No full text
    Shown are results for four values of the avoidance radius (blue, red, green and black curves and markers, respectively). All other simulation parameters are given in Table 3. (top panels) The two coefficients of the stochastic differential equation (left and right panels), as inferred from the simulation results for the virtual swarm. (bottom left panel) Splitting probability as estimated directly from the simulation output [cf. Eq. (53)] (markers) and computed using the integrated stationary distribution [Eq. (54)] (curves). (bottom right panel) Decision time as estimated from the simulation output [Eq. (58)] (markers) and computed from the FPE coefficients [Eq. (59)] (curves).
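
    The decision-time panel can likewise be read against the textbook mean first-passage time for such a process; with psi(s) as defined above, a reflecting boundary at a and an absorbing boundary at b (the paper's Eqs. (58)-(59) may use different boundary conditions), the expected exit time starting from s_0 is:

    ```latex
    % Standard (Gardiner-style) mean first-passage time under the stated assumptions.
    T(s_0) = 2 \int_{s_0}^{b} \frac{\mathrm{d}y}{\psi(y)}
                 \int_{a}^{y} \frac{\psi(z)}{B(z)}\,\mathrm{d}z.
    ```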

    Setup for the kilobot experiments.

    No full text
    The arrow marks the current heading of each bot.